10 research outputs found

    An Android application for crowdsourcing 3G user experience

    Full text link
    This report describes a project split into two parts: a practical part and a research part. The practical part was carried out in Valencia (Spain) under the supervision of the company NUBESIS. We developed an Android application that serves as the mobile client for a website developed by the same company. We describe the problems we encountered while developing the application and how we solved them. We explain the different processes that take place in the application, how these processes are integrated into the application's functionality, and the user's interaction with the different screens and their behaviour. The research part aims to improve the connectivity problems that can appear in the first application. It was carried out in Sydney (Australia) in cooperation with the University of New South Wales (UNSW), under the supervision of professor Mahbub Hassan. In this part we discuss the design and implementation of an Android-based 3G/HSDPA network bandwidth measurement mobile application. This application acts as a mobile sensor in a crowdsourcing system. We use a network link bandwidth estimation technique called packet pair probing, which can easily be implemented on a mobile platform, and we justify why we chose this methodology after reviewing the related literature. We also propose a measurement initiation process with the Measurement Server that allows the packet pair probing technique to reflect an accurate download bandwidth in the measurement. We calibrated and fine-tuned the measurement tool so that it can contribute optimally to the crowdsourcing system, addressing issues such as usability, data consumption, and power consumption. We include geo-tags in each measurement we take and discuss the implementation issues addressed in the project. Finally, we introduce an algorithm that measures the download bandwidth in a timely fashion.
    We study the behaviour of the measurements by changing parameters such as packet size and packet train length. The results were evaluated by comparing them to a reliable commercial bandwidth estimation tool under the same environment. Given these results, we conducted a number of hypothesis tests using the T-statistic as the test statistic under the null hypothesis.
    Martínez Raga, M. (2011). An Android application for crowdsourcing 3G user experience. http://hdl.handle.net/10251/11987
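    The core idea of packet pair probing mentioned in the abstract can be sketched as follows: the receiver estimates the bottleneck link bandwidth from the dispersion (inter-arrival gap) of two equally sized packets sent back-to-back. This is a minimal illustrative sketch, not the application's actual code; the packet size and timestamps in the example are hypothetical.

```python
# Packet pair probing: bandwidth ≈ packet_size / dispersion.
# Two equally sized packets sent back-to-back arrive at the receiver
# separated by the time the bottleneck link needed to serialise the
# second packet, so the gap reveals the bottleneck capacity.

def estimate_bandwidth(packet_size_bytes, arrival_times):
    """Estimate bottleneck bandwidth (bits/s) from packet-pair arrival times."""
    if len(arrival_times) < 2:
        raise ValueError("need at least two arrival timestamps")
    dispersion = arrival_times[1] - arrival_times[0]  # seconds
    if dispersion <= 0:
        raise ValueError("non-positive dispersion: clock or ordering issue")
    return packet_size_bytes * 8 / dispersion

# Hypothetical example: 1500-byte packets arriving 1.2 ms apart
# correspond to a ~10 Mbit/s bottleneck link.
bw = estimate_bandwidth(1500, [0.0000, 0.0012])
print(f"{bw / 1e6:.1f} Mbit/s")  # prints "10.0 Mbit/s"
```

    In practice, as the abstract notes, a single pair is noisy; this is why the study varies packet size and packet train length and averages over multiple probes.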

    Improving the process of analysis and comparison of results in dependability benchmarks for computer systems

    Full text link
    Thesis by compendium. Dependability benchmarks are designed to assess, through the quantitative characterisation of performance and dependability attributes, the behaviour of systems in the presence of faults. In this type of benchmark, where systems are assessed in the presence of perturbations, not being able to select the most suitable system may have serious implications (economic, reputational, or even loss of lives). For that reason, dependability benchmarks are expected to meet certain properties, such as non-intrusiveness, representativeness, repeatability, or reproducibility, that guarantee the robustness and accuracy of their process. However, despite the importance of comparing systems or components, there is a problem in the field of dependability benchmarking regarding the analysis and comparison of results.
    While the main focus in this field of research has been on developing and improving experimental procedures to obtain the required measures in the presence of faults, the processes involving the analysis and comparison of results have remained mostly unattended. As a consequence, many works in this field analyse and compare the results of different systems in an ambiguous way, as the process followed in the analysis is based on argumentation, or is not even reported. Under these circumstances, benchmark users find it difficult to use these benchmarks and to compare their results with those obtained by others. Extending the application of dependability benchmarks and cross-exploiting results among works is therefore currently hardly viable.
    This thesis has focused on developing a methodology to assist dependability benchmark developers and users in tackling the problems present in the analysis and comparison of results. Designed to guarantee the fulfilment of the properties of dependability benchmarks, this methodology seamlessly integrates the process of analysis of results within the procedural flow of a dependability benchmark. Inspired by procedures taken from the field of operational research, it provides evaluators with the means to make their analysis process explicit, and more representative for the given context. The results obtained from applying this methodology to several case studies in different application domains show the contributions of this work to improving the process of analysis and comparison of results in dependability benchmarking for computer systems.
    Martínez Raga, M. (2018). Improving the process of analysis and comparison of results in dependability benchmarks for computer systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/111945

    Towards integrating multi-criteria analysis techniques in dependability benchmarking

    Full text link
    [EN] Increasing integration scales are promoting the development of myriads of new devices and technologies, such as smartphones, ad hoc networks, or field-programmable devices, among others. The proliferation of such devices, with increasing autonomy and communication capabilities, is paving the way for a new paradigm known as the Internet of Things, in which computing is ubiquitous and devices autonomously exchange information and cooperate with each other and with already existing IT infrastructures to improve people's and society's welfare. This new paradigm opens huge business opportunities for manufacturers, application developers, and service providers in very different application domains, like consumer electronics, transport, or health. Accordingly, and to make the most of these incipient opportunities, industry relies more than ever on the use and re-use of commercial off-the-shelf (COTS) components, developed either in-house or by third parties, to decrease time-to-market and costs. In this race to hit the market first, companies are nowadays concerned with the dependability of both COTS components and final products, even for non-critical applications, as unexpected failures may damage the reputation of the manufacturer and limit the acceptability of their new products. Therefore, benchmarking techniques adapted to dependability contexts (dependability benchmarking) are being deployed in order to assess, compare, and select i) the best-suited COTS components, among existing alternatives, to be integrated into a new product, and ii) the configuration parameters that achieve the best trade-off between performance and dependability.
    However, although dependability benchmarking procedures have been defined and applied to a wide set of application domains, no rigorous and precise decision-making process has been established yet, thus hindering the main goal of these approaches: the fair and accurate comparison and selection of existing alternatives taking into account both performance and dependability attributes. Indeed, results extracted from experimentation could be interpreted in so many different ways, according to the context of use of the system and the subjectivity of the benchmark analyser, that defining a clear and accurate decision-making process is a must to enable the reproducibility of conclusions. Thus, this master's thesis focuses on how to integrate a decision-making methodology into the regular dependability benchmarking procedure. The challenges to be faced include how to deal with the requirements of industry, which wants just a single score characterising a system, and of academia, which wants as many measures as possible to accurately characterise the system, and how to navigate from one representation to another without losing meaningful information.
    Martínez Raga, M. (2013). Towards integrating multi-criteria analysis techniques in dependability benchmarking. http://hdl.handle.net/10251/39987

    Gaining confidence on dependability benchmarks conclusions through back-to-back testing

    Full text link
    ©2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    The main goal of any benchmark is to guide decisions through system ranking, but surprisingly little research has focused so far on providing means to gain confidence in the analysis carried out with benchmark results. Including a back-to-back testing approach in the benchmark analysis process, to compare conclusions and gain confidence in the final adopted choices, seems a convenient way to cope with this challenge. The proposal is to look for the coherence of rankings issued from the application of independent multiple-criteria decision-making (MCDM) techniques on the results. Although any MCDM method can potentially be used, this paper reports our experience using the Logic Score of Preferences (LSP) and the Analytic Hierarchy Process (AHP). Discrepancies in the provided rankings invalidate conclusions and must be tracked to discover incoherences and correct the related analysis errors. Once the rankings are coherent, so is the underlying analysis, thus increasing our confidence in the supplied conclusions.
    Work partially supported by the Spanish project ARENES (TIN2012-38308-C02-01).
    Martínez Raga, M.; Andrés Martínez, DD.; Ruiz García, JC. (2014). Gaining confidence on dependability benchmarks conclusions through back-to-back testing. IEEE. doi:10.1109/EDCC.2014.20
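    The coherence check at the heart of this back-to-back approach can be sketched as follows: rankings produced by two independent MCDM techniques are compared, and any discrepancy flags an analysis error to track down. This is a minimal illustration, not the paper's actual tooling; the system names and scores are hypothetical.

```python
# Back-to-back testing of benchmark conclusions: two independent MCDM
# techniques (e.g. LSP and AHP) score the same candidate systems, and
# their resulting rankings must coincide before conclusions are trusted.

def ranking(scores):
    """Order candidate names from best (highest score) to worst."""
    return [name for name, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

def coherent(scores_a, scores_b):
    """True when both techniques rank the candidates identically."""
    return ranking(scores_a) == ranking(scores_b)

# Hypothetical scores for three benchmarked systems under two techniques.
lsp = {"sysA": 0.81, "sysB": 0.67, "sysC": 0.52}
ahp = {"sysA": 0.45, "sysB": 0.35, "sysC": 0.20}
print(coherent(lsp, ahp))  # identical orderings -> rankings are coherent
```

    Note that only the orderings are compared, not the scores themselves: each MCDM technique produces scores on its own scale, so rank agreement is the meaningful coherence criterion.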

    From measures to conclusions using Analytic Hierarchy Process in dependability benchmarking

    Full text link
    © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Dependability benchmarks are aimed at comparing and selecting alternatives in application domains where faulty conditions are present. However, despite their importance and intrinsic complexity, a rigorous decision process has not been defined yet. As a result, benchmark conclusions may vary from one evaluator to another, and the process is often vague and hard to follow, or even nonexistent. This situation affects the repeatability and reproducibility of the analysis process, making the cross-comparison of results between works difficult. To mitigate these problems, this paper proposes the integration of the analytic hierarchy process (AHP), a widely used multicriteria decision-making technique, within dependability benchmarks. In addition, an assisted pairwise comparison approach is proposed to automate those aspects of AHP that rely on judgemental comparisons, thus ensuring consistent, repeatable, and reproducible conclusions. Results from a dependability benchmark for wireless sensor networks are used to illustrate and validate the proposed approach.
    This work was supported in part by the Spanish Project ARENES under Grant TIN2012-38308-C02-01 and in part by the Programa de Ayudas de Investigación y Desarrollo of the Universitat Politècnica de València, Valencia, Spain. The Associate Editor coordinating the review process was Dr. Dario Petri.
    Martínez Raga, M.; Andrés Martínez, DD.; Ruiz García, JC.; Friginal López, J. (2014). From measures to conclusions using Analytic Hierarchy Process in dependability benchmarking. IEEE Transactions on Instrumentation and Measurement. 63(11):2548-2556. https://doi.org/10.1109/TIM.2014.2348632

    Professionals' perceptions about healthcare resources for co-occurring disorders in Spain

    Get PDF
    Since the provision of integrated services for patients with dual pathology or dual disorders (the coexistence of an addictive disorder and other psychiatric disorders) constitutes an important challenge, this study compared the perceptions of healthcare professionals with the existing, current state of specific resources for patients with dual pathology in Spain. This was an epidemiological, observational, cross-sectional, multicenter study with a large, representative sample of healthcare professionals attending patients with dual pathology in treatment resources throughout Spain. Participants completed a specifically designed ad hoc online questionnaire about their perceptions of the existence of available resources and treatment needs for patients with dual pathology. To compare professionals' perceptions with the existing available resources, the same online questionnaire was also completed by commissioners and managers responsible for national and regional healthcare plans on drug abuse. A total of 659 professionals, mostly psychologists (43.40%) or psychiatrists (32.93%), agreed to participate in the study. The highest degree of concordance between the professionals' perceptions and the actual situation was found regarding the existence of mental health and addiction networks (either separate or unified) (74.48%), followed by specific workshops (73.08%), sub-acute inpatient units (67.38%), specific hospitalization units (66.26%), detoxification units (63.15%), and outpatient programs (60.73%). We detected a lower degree of agreement regarding specific occupational rehabilitation centers (59.34%), day hospitals (58.93%), day centers (57.88%), outpatient intermediate resources (48.87%), psychiatric acute admission units (46.54%), and therapeutic communities (43.77%). In addition, on average, healthcare professionals underestimated the number of resources present in their respective communities.
    Relevant differences exist between professionals' perceptions and the existing resources available for dual pathology patients in Spain, thus supporting the need for additional efforts and strategies to establish a registry and to clearly inform about the available resources for patients with dual diagnosis.

    REFRAHN: A Resilience Evaluation Framework for Ad Hoc Routing Protocols

    Full text link
    [EN] Routing protocols are key elements of ad hoc networks. They are in charge of establishing routes between network nodes efficiently. Despite the interest shown by the scientific community and industry in turning the first specifications of ad hoc routing protocols into functional prototypes, aspects such as the resilience of these protocols remain generally unaddressed in practice. Tackling this issue becomes critical given the increasing variety of accidental and malicious faults (attacks) that may impact the behaviour exhibited by ad hoc routing protocols. The main objective of this paper is to deepen the methodological aspects concerning fault injection in routing protocols. As a result, we design and implement a framework based on the injection of accidental and malicious faults to quantitatively evaluate their impact on routing protocols. This framework, called REFRAHN (Resilience Evaluation FRamework for Ad Hoc routiNg protocols), can be used to (i) reduce the uncertainty about the sources of perturbations in the deployment of ad hoc routing protocols, (ii) design fault-tolerant mechanisms that address and minimise such problems, and (iii) compare and select the routing protocol that optimises the performance and robustness of the network.
    2015 Elsevier B.V. All rights reserved.
    Friginal, J.; Andrés Martínez, DD.; Ruiz García, JC.; Martínez Raga, M. (2015). REFRAHN: A Resilience Evaluation Framework for Ad Hoc Routing Protocols. Computer Networks. 82:114-134. doi:10.1016/j.comnet.2015.02.032

    Relative importance of Recall, Precision, F-Measure, Informedness, and Markedness metrics to evaluate security tools in Business Critical, Heightened Critical, Best Effort, and Minimum Effort scenarios, according to the declared preferences and familiarity with measures of experts in the domain

    Full text link
    Experts were asked to complete a Google Forms questionnaire (https://goo.gl/forms/EEmkUmLIj20nMJS33) to compare all 5 metrics in pairs for the 4 considered scenarios (40 comparisons). Two questions were defined for each pairwise comparison: i) which is the preferred metric of the two presented (A/B), and ii) what is the intensity of this preference (following Saaty's fundamental scale of absolute numbers: 1-5). Likewise, experts declared their familiarity with the considered metrics on a 1-5 Likert scale. This information is then used to compute each expert's individual judgement by i) computing the geometric mean of each row of her pairwise comparison matrix, ii) summing up all computed geometric means, and iii) dividing each geometric mean by the resulting sum. The result is a priority vector. The Consistency Ratio (CR) is computed in three successive steps: i) the Principal Eigen Vector (PEV) is calculated by multiplying the sums of the columns of the pairwise comparison matrix by the weights contained in the priority vector, ii) a consistency index (CI) is deduced from the PEV and the number of metrics under study, and iii) the CR is obtained by normalising the CI by the random consistency index (RI), which is taken directly from the table defined in T. L. Saaty, "Decision-making with the AHP: Why is the principal eigenvector necessary", European Journal of Operational Research, vol. 145, no. 1, pp. 85-91, 2003. Inconsistent matrices are not taken into account (weight 0.00). The familiarity declared by each expert is used to compute, via the row geometric mean, the contribution (weight) that her preferences for metrics will have in each scenario. The weight of each metric for each scenario (consensus priority vector) is also obtained using the weighted geometric mean.
    [EN] The benchmarking of security tools aims to determine which tools are more suitable to detect system vulnerabilities or intrusions. The analysis process is usually oversimplified by employing just a single metric out of the large set available. Accordingly, the decision may be biased by not considering relevant information provided by the neglected metrics. This work proposes a novel approach that takes into account several metrics, different scenarios, and the advice of multiple experts. The proposal relies on experts quantifying the relative importance of each pair of metrics towards the requirements of a given scenario. Their judgements are aggregated using group decision-making techniques, and weighted according to the familiarity of the experts with the metrics and the scenario, to compute a set of weights accounting for the relative importance of each metric. Then, weight-based multi-criteria decision-making techniques can be used to rank the benchmarked tools. This dataset contains raw data obtained from 21 experts, who declared their familiarity with the considered metrics and their preference for each metric in the considered scenarios. Processed data include the consistency ratio of the resulting pairwise comparison matrices (inconsistent matrices are rejected, with weight = 0.00), the relative contribution of each expert according to their declared familiarity with the metrics and the computed CRs, and the contribution (weight) of each metric towards each considered scenario.
    Martínez Raga, M.; Ruiz García, JC.; Antunes, N.; Andrés Martínez, DD.; Vieira, M. (2021). Relative importance of Recall, Precision, F-Measure, Informedness, and Markedness metrics to evaluate security tools in Business Critical, Heightened Critical, Best Effort, and Minimum Effort scenarios, according to the declared preferences and familiarity with measures of experts in the domain. Universitat Politècnica de València. https://doi.org/10.4995/Dataset/10251/16212
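    The priority vector and consistency ratio computations described above can be sketched as follows. This is an illustrative implementation of the stated steps (row geometric means normalised into a priority vector, then lambda_max estimated from column sums and weights), not the dataset's actual processing code; the example matrix is hypothetical.

```python
import math

# Saaty's random consistency index (RI) for matrix sizes 1..5,
# as tabulated in the AHP literature.
RI = {1: 1e-9, 2: 1e-9, 3: 0.58, 4: 0.90, 5: 1.12}

def priority_vector(M):
    """Row geometric means of a pairwise comparison matrix, normalised to sum 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gms)
    return [g / total for g in gms]

def consistency_ratio(M):
    """CR = CI / RI, with lambda_max estimated from column sums and weights."""
    n = len(M)
    w = priority_vector(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    lambda_max = sum(c * wi for c, wi in zip(col_sums, w))
    ci = (lambda_max - n) / (n - 1)
    return ci / RI[n]

# Perfectly consistent 3x3 matrix (A is 2x B, B is 2x C): CR is ~0,
# and a CR above ~0.10 would conventionally mark the matrix inconsistent.
M = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
print(priority_vector(M))    # weights ~ [0.571, 0.286, 0.143]
print(consistency_ratio(M))  # ~ 0.0
```

    Experts whose matrices exceed the consistency threshold would, as described above, receive weight 0.00 in the consensus aggregation.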

    Relevant Differences in Perception and Knowledge of Professionals in Different Spanish Autonomous Communities Regarding Availability of Resources for Patients with Dual Disorders

    No full text
    OBJECTIVES: To assess the knowledge of health professionals attending patients with dual disorders about specific resources for patients with this condition in different Spanish regions. METHODS: Observational, cross-sectional, multicenter study comparing the perceptions of healthcare professionals (n=659) with the reality regarding specific resources available for patients with dual disorders in Spain. The professionals completed an online questionnaire. Nineteen commissioners and managers responsible for national and regional substance abuse programs also completed the questionnaire. RESULTS: A representative sample of professionals from each community (553 centers in 235 Spanish cities) participated in the study. Most participants (93.2%) felt that specific resources for patients with dual disorders are needed. High percentages of professionals thought that there were no specific workshops (88.4%), subacute units (83.1%), day hospitals (82.8%), specific day centers (78.5%), or outpatient programs (73.2%) for patients with dual disorders. The actual knowledge of professionals regarding the existence of specific resources varied according to the type of resource and the autonomous community. The professionals generally underestimated the number of units available in their communities. CONCLUSIONS: There were clear differences in the actual knowledge that healthcare professionals had about the resources available for patients with dual disorders depending on the autonomous community where they were practicing. Actions are needed to harmonize knowledge nationally, for example through a single registry, a white paper, or a national program for patients with dual disorders.

    Long-term effect of a practice-based intervention (HAPPY AUDIT) aimed at reducing antibiotic prescribing in patients with respiratory tract infections

    No full text